Search for: All records

Creators/Authors contains: "Chen, Tingjun"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available February 15, 2025
  2. Free, publicly-accessible full text available October 16, 2024
  3. Future wireless networks need to support the increasing demands for high data rates and improved coverage. One promising solution is sectorization, where an infrastructure node (e.g., a base station) is equipped with multiple sectors employing directional communication. Although the concept of sectorization is not new, it is critical to fully understand the potential of sectorized networks, such as the rate gain achieved when multiple sectors can be simultaneously activated. In this paper, we focus on sectorized wireless networks, where sectorized infrastructure nodes with beam-steering capabilities form a multi-hop mesh network for data forwarding and routing. We present a sectorized node model and characterize the capacity region of these sectorized networks. We define the flow extension ratio and the corresponding sectorization gain, which quantitatively measure the performance gain introduced by node sectorization as a function of the network flow. Our objective is to find the optimal sectorization of each node that achieves the maximum flow extension ratio, and thus the sectorization gain. Towards this goal, we formulate the corresponding optimization problem and develop an efficient distributed algorithm that obtains the node sectorization under a given network flow with an approximation ratio of 2/3. Through extensive simulations, we evaluate the sectorization gain and the performance of the proposed algorithm in various network scenarios with varying network flows. The simulation results show that the approximate sectorization gain increases sublinearly as a function of the number of sectors per node. 
    Free, publicly-accessible full text available October 1, 2024
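The abstract above defines the flow extension ratio against a given network flow but does not spell out the formulation. As a loose, hypothetical illustration (the helper names and the bottleneck-ratio feasibility test are assumptions, not the paper's model), one can read it as the ratio of the maximum uniform scalings of a flow that remain feasible with and without sectorization:

```python
# Hypothetical illustration only -- not the paper's formulation. Treats
# "how far a flow can be scaled" as the bottleneck capacity-to-load ratio.

def max_flow_scaling(link_capacity: dict, link_load: dict) -> float:
    """Largest lambda such that lambda * load still fits on every link."""
    return min(link_capacity[e] / link_load[e]
               for e in link_load if link_load[e] > 0)

def flow_extension_ratio(lambda_sectorized: float,
                         lambda_baseline: float) -> float:
    """Gain from sectorization: ratio of the max feasible flow scalings."""
    return lambda_sectorized / lambda_baseline

# Toy 2-link example where sectorization raises effective link capacity.
baseline = max_flow_scaling({"ab": 10.0, "bc": 8.0}, {"ab": 2.0, "bc": 4.0})
sectorized = max_flow_scaling({"ab": 20.0, "bc": 12.0}, {"ab": 2.0, "bc": 4.0})
print(flow_extension_ratio(sectorized, baseline))  # 1.5
```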
  4. Free, publicly-accessible full text available October 2, 2024
  5. Baseband processing is one of the most time-consuming and computationally expensive tasks in radio access networks (RANs), which is typically realized in dedicated hardware. The concept of virtualizing the RAN functions by moving their computation to edge data centers can significantly reduce the deployment cost and enable more flexible use of the network resources. Recent studies have focused on software-based baseband processing for large-scale sub-6 GHz MIMO systems, while 5G also embraces the millimeter-wave (mmWave) frequency bands to achieve further improved data rates leveraging the widely available spectrum. Therefore, it is important to build a platform for the experimental investigation of software-based baseband processing for mmWave MIMO systems. In this paper, we implement programmable mmWave MIMO radios equipped with real-time baseband processing capability, leveraging the open-access PAWR COSMOS testbed. We first develop Agora-UHD, which enables UHD-based software-defined radios (SDRs) to interface with Agora, an open-source software realization of real-time massive MIMO baseband processing. Next, we integrate Agora-UHD with the USRP SDRs and IBM 28 GHz phased array antenna module (PAAM) subsystem boards deployed in the PAWR COSMOS testbed. We demonstrate a 2×2 28 GHz polarization MIMO link with a bandwidth of 122.88 MHz, and show that it can meet the real-time processing deadline of 0.375 ms (3 transmission time intervals for numerology 3 in 5G NR FR2) using only 8 CPU cores. The source code of Agora-UHD and its integration with the programmable 28 GHz radios in the COSMOS testbed with example tutorials are made publicly available. 
    Free, publicly-accessible full text available October 1, 2024
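The 0.375 ms deadline quoted above follows directly from 5G NR timing: numerology μ gives a slot (TTI) duration of 1 ms / 2^μ, so 3 TTIs at numerology 3 yield 0.375 ms. A minimal sketch of that arithmetic (function names are ours, not Agora-UHD's):

```python
# In 5G NR, numerology mu gives a subcarrier spacing of 15 * 2**mu kHz
# and a slot duration of 1 ms / 2**mu.

def slot_duration_ms(mu: int) -> float:
    """Slot (TTI) duration in milliseconds for 5G NR numerology mu."""
    return 1.0 / (2 ** mu)

def processing_deadline_ms(mu: int, n_slots: int) -> float:
    """Deadline for processing n_slots transmission time intervals."""
    return n_slots * slot_duration_ms(mu)

# Numerology 3 (120 kHz SCS, used in FR2): a slot lasts 0.125 ms,
# so 3 TTIs give the 0.375 ms real-time deadline cited above.
print(processing_deadline_ms(3, 3))  # 0.375
```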
  6. Optical networks satisfy the high-bandwidth and low-latency requirements of telecommunication networks and data center interconnection. To improve network resource utilization, machine learning (ML) is used to accurately model optical amplifiers such as erbium-doped fiber amplifiers (EDFAs), whose behavior impacts end-to-end system performance such as the quality of transmission. However, a comprehensive measurement dataset is required for ML to accurately predict an EDFA's wavelength-dependent gain. We present an open dataset consisting of 202,752 gain spectrum measurements collected from 16 commercial-grade reconfigurable optical add–drop multiplexer (ROADM) booster and pre-amplifier EDFAs under varying gain settings and diverse channel-loading configurations over 2,785 hours in total, with a total dataset size of 3.1 GB. With this EDFA dataset, we implement component-level deep-neural-network-based EDFA models and use transfer learning (TL) to transfer the EDFA model among the 16 ROADM EDFAs, achieving less than 0.18/0.24 dB mean absolute error for booster/pre-amplifier gain prediction using only 0.5% of the full target training set. We also show that TL reduces the EDFA data collection requirements for a new gain setting or a different type of EDFA on the same ROADM.

     
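The 0.18/0.24 dB figures above are mean absolute errors between predicted and measured wavelength-dependent gain spectra. A small sketch of that metric, with made-up toy spectra (this is our illustration, not the authors' code; array shapes and values are assumptions):

```python
import numpy as np

# Illustrative sketch: MAE between predicted and measured per-channel
# gain spectra, in dB, averaged over all channels and measurements.

def gain_mae_db(predicted_db: np.ndarray, measured_db: np.ndarray) -> float:
    """Mean absolute error (dB) across all channels and spectra."""
    return float(np.mean(np.abs(predicted_db - measured_db)))

# Toy example with hypothetical 95-channel gain spectra.
rng = np.random.default_rng(0)
measured = 18.0 + rng.normal(0.0, 0.5, size=(4, 95))
predicted = measured + rng.normal(0.0, 0.2, size=(4, 95))
print(round(gain_mae_db(predicted, measured), 3))
```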
  7. There are increasing requirements for data center interconnection (DCI) services, which use fiber to connect DCs distributed across a metro area and quickly establish high-capacity optical paths among cloud services, mobile edge computing, and users. In such networks, coherent transceivers with various optical frequency ranges, modulators, and modulation formats installed at each connection point must be used to meet service requirements such as fast-varying traffic between user computing resources. This calls for technology and architectures that enable users and DCI operators to cooperate in achieving fast provisioning of WDM links and flexible route switching on short timescales, independent of each transceiver's implementation and characteristics. We propose an approach that estimates the end-to-end (EtE) generalized signal-to-noise ratio (GSNR) accurately and quickly: rather than measuring the GSNR at the operational route and wavelength of the EtE optical path, we apply a quality-of-transmission probe channel link by link, at a wavelength and modulation format convenient for measurement. Assuming connections between transceivers of various frequency ranges, modulators, and modulation formats, we propose a device software architecture in which the DCI operator optimizes the transmission mode between user transceivers with high accuracy using only common parameters such as the bit error rate. We first implement software libraries for fast WDM provisioning and experimentally build different routes to verify the accuracy of this approach. For the operational EtE GSNR measurements, the accuracy estimated from the sum of the per-link measurements was 0.6 dB, and the wavelength-dependent error was about 0.2 dB. Then, using field fibers deployed in the NSF COSMOS testbed, a Linux-based transmission device software architecture, and transceivers with different optical frequency ranges, modulators, and modulation formats, fast WDM provisioning of an optical path was completed within 6 min.

     
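The link-by-link estimation above rests on the standard GN-model view that noise contributions accumulate along a path, so the end-to-end GSNR is the inverse sum of per-link GSNRs in linear units. A minimal sketch of that accumulation (our illustration, not the authors' implementation):

```python
import math

# End-to-end GSNR from per-link probe measurements: noise adds, so
# 1 / GSNR_EtE = sum_i (1 / GSNR_i) in linear units.

def db_to_linear(x_db: float) -> float:
    return 10 ** (x_db / 10)

def linear_to_db(x: float) -> float:
    return 10 * math.log10(x)

def ete_gsnr_db(link_gsnrs_db) -> float:
    """End-to-end GSNR (dB) from a list of per-link GSNRs (dB)."""
    inv_sum = sum(1.0 / db_to_linear(g) for g in link_gsnrs_db)
    return linear_to_db(1.0 / inv_sum)

# Two identical 20 dB links halve the linear GSNR: 20 dB -> ~16.99 dB.
print(round(ete_gsnr_db([20.0, 20.0]), 2))  # 16.99
```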
  8. Free, publicly-accessible full text available October 1, 2024
  9. We implement and test transfer learning-based gain models across 16 ROADM EDFAs, which achieve less than 0.17/0.30 dB mean absolute error for booster/pre-amplifier gain prediction using only 0.5% of the full target EDFA dataset. 